Search for: All records

Creators/Authors contains: "Sreekumar, Sreejith"


  1. Statistical divergences (SDs), which quantify the dissimilarity between probability distributions, are a basic constituent of statistical inference and machine learning. A modern method for estimating such divergences relies on parametrizing an empirical variational form by a neural network (NN) and optimizing over parameter space. Such neural estimators are abundantly used in practice, but the corresponding performance guarantees are partial and call for further exploration. We establish non-asymptotic absolute-error bounds for a neural estimator realized by a shallow NN, focusing on four popular f-divergences---Kullback-Leibler, chi-squared, squared Hellinger, and total variation. Our analysis relies on non-asymptotic function approximation theorems and tools from empirical process theory to bound the two sources of error involved: function approximation and empirical estimation. The bounds characterize the effective error in terms of the NN size and the number of samples, and reveal scaling rates that ensure consistency. For compactly supported distributions, we further show that neural estimators of the first three divergences above, with an appropriate NN growth rate, are minimax rate-optimal, achieving the parametric convergence rate.
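The listing does not reproduce the variational form these estimators parametrize. As a point of reference (a standard conjugate-dual form, stated here as an assumption; the paper's exact estimator may differ), neural f-divergence estimation typically optimizes the lower bound obtained from the convex conjugate f* of f:

```latex
% Variational representation underlying neural f-divergence estimation
% (standard conjugate-dual form, stated as background; the paper's exact
% estimator may differ). f^* denotes the convex (Legendre) conjugate of f.
\[
  D_f(P \,\|\, Q)
    = \sup_{g:\,\mathcal{X}\to\mathbb{R}}
      \mathbb{E}_P\big[g(X)\big] - \mathbb{E}_Q\big[f^*\big(g(X)\big)\big].
\]
```

A neural estimator restricts the supremum to functions realizable by a shallow NN and replaces both expectations with sample means; these two relaxations are exactly the error sources the abstract names, function approximation and empirical estimation.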
  2. Statistical distances (SDs), which quantify the dissimilarity between probability distributions, are central to machine learning and statistics. A modern method for estimating such distances from data relies on parametrizing a variational form by a neural network (NN) and optimizing it. These estimators are abundantly used in practice, but corresponding performance guarantees are partial and call for further exploration. In particular, there seems to be a fundamental tradeoff between the two sources of error involved: approximation and estimation. While the former needs the NN class to be rich and expressive, the latter relies on controlling complexity. This paper explores this tradeoff by means of non-asymptotic error bounds, focusing on three popular choices of SDs—Kullback-Leibler divergence, chi-squared divergence, and squared Hellinger distance. Our analysis relies on non-asymptotic function approximation theorems and tools from empirical process theory. Numerical results validating the theory are also provided. 
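To make the estimator family concrete, here is a minimal sketch (not the authors' code; the network width, optimizer, epoch count, and sample sizes are illustrative assumptions) of a Donsker-Varadhan neural estimator of the first SD above, the KL divergence:

```python
# Minimal sketch of a neural KL estimator via the Donsker-Varadhan form
#   D_KL(P||Q) = sup_g E_P[g] - log E_Q[exp(g)],
# with g realized by a shallow (one-hidden-layer) network.
# Illustrative assumptions throughout; not the authors' implementation.
import math
import torch
import torch.nn as nn

def neural_kl_estimate(x_p, x_q, width=64, epochs=500, lr=1e-3):
    """Estimate D_KL(P||Q) from samples x_p ~ P, x_q ~ Q (2-D float tensors)."""
    g = nn.Sequential(nn.Linear(x_p.shape[1], width), nn.ReLU(), nn.Linear(width, 1))
    opt = torch.optim.Adam(g.parameters(), lr=lr)

    def dv_bound():
        # log E_Q[exp(g)] computed stably as logsumexp(g(x_q)) - log n
        gp, gq = g(x_p).squeeze(1), g(x_q).squeeze(1)
        return gp.mean() - (torch.logsumexp(gq, dim=0) - math.log(len(x_q)))

    for _ in range(epochs):
        loss = -dv_bound()  # maximize the DV lower bound
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return dv_bound().item()

# Example: P = N(1, 1) and Q = N(0, 1) in one dimension; true D_KL = 0.5.
print(neural_kl_estimate(torch.randn(2000, 1) + 1.0, torch.randn(2000, 1)))
```

Widening the network tightens the approximation error while more samples shrink the estimation error, which is the tradeoff both abstracts above analyze.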
  3. In many information-theoretic channel coding problems, adding an input cost constraint to the operational setup amounts to restricting the optimization domain in the capacity formula. This paper shows that, in contrast to common belief, such a simple modification does not hold for the cost-constrained (CC) wiretap channel (WTC). The secrecy-capacity of the discrete memoryless (DM) WTC without cost constraints is described by a single auxiliary random variable. For the CC DM-WTC, however, we show that two auxiliaries are necessary to achieve capacity. Specifically, we first derive the secrecy-capacity formula, proving the direct part via superposition coding. Then, we provide an example of a CC DM-WTC whose secrecy-capacity cannot be achieved using a single auxiliary. This establishes the fundamental role of superposition coding over CC WTCs. 
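For context, the single-auxiliary description the abstract refers to is the classical Csiszár-Körner secrecy-capacity formula for the unconstrained DM-WTC, restated here as background (the two-auxiliary CC formula is derived in the paper itself):

```latex
% Secrecy capacity of the unconstrained DM-WTC (Csiszar-Korner), with one
% auxiliary V satisfying the Markov chain V - X - (Y, Z); Y is the
% legitimate receiver's output and Z the eavesdropper's observation.
\[
  C_{\mathrm{s}}
    = \max_{p_{V,X}\,:\; V - X - (Y,Z)} \big[\, I(V;Y) - I(V;Z) \,\big].
\]
```

The paper's point is that adding a cost constraint (say, E[b(X)] ≤ B for a cost function b, in our notation) does not simply restrict this maximization to admissible input distributions; achieving the CC secrecy capacity requires a second auxiliary, realized operationally by superposition coding.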
  4. We consider the problem of soft-covering with constant-composition superposition codes and characterize the optimal soft-covering exponent. A double-exponential concentration bound for the deviation of the exponent from its mean is also established. We demonstrate an application of the result to achieving the secrecy-capacity region of a broadcast channel with confidential messages under a per-codeword cost constraint. This generalizes the recent characterization of the wiretap channel secrecy-capacity under an average cost constraint, highlighting the potential utility of the superposition soft-covering result in the analysis of coding problems.
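As background on the soft-covering terminology (the standard lemma, stated in our notation; the paper's contribution is the exact exponent for constant-composition superposition codes), soft-covering asks how well the output distribution induced by a rate-R random codebook approximates a target product distribution:

```latex
% Standard soft-covering setup, stated as background in our notation.
% Codebook C_n = {X^n(1), ..., X^n(2^{nR})}; Q_Y is the output marginal
% of the input distribution P_X through the channel Q_{Y|X}.
\[
  P_{Y^n \mid \mathcal{C}_n}(y^n)
    = 2^{-nR} \sum_{m=1}^{2^{nR}} \prod_{i=1}^{n} Q_{Y|X}\big(y_i \mid X_i(m)\big).
\]
% Soft-covering lemma: if R > I(X;Y), then
\[
  \mathbb{E}_{\mathcal{C}_n}
    \big\| P_{Y^n \mid \mathcal{C}_n} - Q_Y^{\otimes n} \big\|_{\mathrm{TV}}
    \le e^{-n\gamma} \quad \text{for some } \gamma > 0,
\]
% and the best achievable decay rate is the soft-covering exponent.
```

The paper characterizes this optimal exponent when the codewords are drawn as constant-composition superposition codewords, and uses it to handle the per-codeword cost constraint in the broadcast setting above.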